The ethics of algorithmic regulation


In my last three posts on data ethics, I explored a few of the ethical dilemmas in our data-driven world. From examining the ethical practices of free internet service providers to the problem of high-frequency trading, I’ve come to realize the depth and complexity of these issues. Anyone aware of them will naturally look for answers. Some have suggested government intervention, but is it even possible for governments to regulate such fast-moving, rapidly evolving technology?

Evgeny Morozov, author of the book To Save Everything, Click Here: The Folly of Technological Solutionism, recently wrote an interesting Guardian article, The Rise of Data and the Death of Politics, which explores the ethical implications of the new data-driven approach to governance known as algorithmic regulation.

To see it at work, Morozov explained, “look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule—and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.”
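
The mechanism Morozov describes is essentially online machine learning: every “report spam” or “not spam” click becomes a training example. As an illustration only (Gmail’s real filter is proprietary and vastly more sophisticated), here is a minimal naive Bayes sketch in Python in which user feedback continually reshapes what the filter considers spam:

```python
import math
from collections import defaultdict

class FeedbackSpamFilter:
    """A toy online naive Bayes filter that learns from user feedback.

    A sketch of the feedback loop Morozov describes, not Google's
    actual (proprietary) system.
    """

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.message_counts = {"spam": 0, "ham": 0}

    @staticmethod
    def _tokens(text):
        return text.lower().split()

    def feedback(self, text, is_spam):
        """A user's 'report spam' / 'not spam' click updates the model."""
        label = "spam" if is_spam else "ham"
        self.message_counts[label] += 1
        for word in self._tokens(text):
            self.word_counts[label][word] += 1

    def score(self, text):
        """Estimate the probability that a message is spam."""
        # Prior log-odds from how many spam vs. ham reports we've seen
        log_odds = math.log((self.message_counts["spam"] + 1) /
                            (self.message_counts["ham"] + 1))
        spam_total = sum(self.word_counts["spam"].values())
        ham_total = sum(self.word_counts["ham"].values())
        vocab = len(set(self.word_counts["spam"]) |
                    set(self.word_counts["ham"])) or 1
        # Laplace-smoothed per-word evidence
        for word in self._tokens(text):
            p_spam = (self.word_counts["spam"].get(word, 0) + 1) / (spam_total + vocab)
            p_ham = (self.word_counts["ham"].get(word, 0) + 1) / (ham_total + vocab)
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))

spam_filter = FeedbackSpamFilter()
spam_filter.feedback("win free money now", is_spam=True)
spam_filter.feedback("lunch meeting at noon", is_spam=False)
print(spam_filter.score("free money"))       # high: words reported as spam
print(spam_filter.score("meeting at noon"))  # low: words reported as ham
```

Note that no rule here was written by hand: what counts as spam emerges, and keeps shifting, as users click. That is the essence of algorithmic regulation.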

Algorithmic regulation is enabled by the unrelenting trend of the smartification of everything (i.e., smartphones, smart cars, smart appliances, etc.). Morozov used a wide variety of examples in his article, but the one that best exemplifies an ethical dilemma is a near future dominated by fitness and health tracking devices. The reason is not how these devices will help us monitor our fitness and health, but how they will help companies monetize our fitness and health. To paraphrase Tim O’Reilly, just as advertising became the native business model for the internet, insurance will be the native business model for the internet of things. That means your health insurance premiums would be determined by these devices monitoring whether you really did give up smoking, are drinking less alcohol, are staying slim, and are keeping your blood pressure under control.
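
To make that monetization concrete, consider a deliberately crude Python sketch of sensor-driven pricing. Every threshold and multiplier below is invented for illustration; no real insurer’s rules are being quoted:

```python
from dataclasses import dataclass

@dataclass
class HealthProfile:
    """Hypothetical readings reported by a wearable device."""
    cigarettes_per_day: float
    alcohol_units_per_week: float
    bmi: float
    avg_systolic_bp: float

def adjusted_premium(base_premium: float, profile: HealthProfile) -> float:
    """Toy sensor-driven pricing; all surcharges are made up for this sketch."""
    premium = base_premium
    if profile.cigarettes_per_day > 0:
        premium *= 1.50   # smoking surcharge
    if profile.alcohol_units_per_week > 14:
        premium *= 1.10   # heavy-drinking surcharge
    if profile.bmi > 30:
        premium *= 1.20   # obesity surcharge
    if profile.avg_systolic_bp > 140:
        premium *= 1.15   # hypertension surcharge
    return round(premium, 2)

# A policyholder whose tracker reports smoking and high blood pressure
profile = HealthProfile(cigarettes_per_day=5, alcohol_units_per_week=10,
                        bmi=27, avg_systolic_bp=150)
print(adjusted_premium(100.00, profile))  # 100 * 1.50 * 1.15 = 172.5
```

Notice what the function’s signature admits and what it excludes: only what a sensor can measure ever enters the calculation, which is exactly the problem Morozov raises next.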

“The unstated assumption,” Morozov explained, “is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop—as some self-tracking aficionados are wont to do—but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data [emphasis mine]. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.”

While algorithms work well as spam filters for our email, I doubt algorithms can act as ethics filters to regulate our increasingly data-driven world. What do you think? Share your thoughts below.


About the Author

Jim Harris

Blogger-in-Chief at Obsessive-Compulsive Data Quality (OCDQ)

Jim Harris is a recognized data quality thought leader with 25 years of enterprise data management industry experience. He is an independent consultant, speaker, and freelance writer, and the Blogger-in-Chief at Obsessive-Compulsive Data Quality, an independent blog offering a vendor-neutral perspective on data quality and its related disciplines, including data governance, master data management, and business intelligence.
